
    Human Pose Estimation using Deep Consensus Voting

    In this paper we consider the problem of human pose estimation from a single still image. We propose a novel approach in which each location in the image votes for the position of each keypoint using a convolutional neural net. The voting scheme allows us to utilize information from the whole image, rather than rely on a sparse set of keypoint locations. Using dense, multi-target votes not only produces good keypoint predictions, but also enables us to compute image-dependent joint keypoint probabilities by looking at consensus voting. This differs from most previous methods, where joint probabilities are learned from relative keypoint locations and are independent of the image. We finally combine the keypoint votes and joint probabilities to identify the optimal pose configuration. We show competitive performance on the MPII Human Pose and Leeds Sports Pose datasets.
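The dense voting idea can be illustrated with a toy sketch (not the paper's actual network): every pixel casts a weighted vote for a keypoint position, votes are accumulated into a heatmap, and the consensus peak is the keypoint estimate. The offset and weight arrays here are hypothetical stand-ins for the CNN outputs.

```python
import numpy as np

def accumulate_votes(offsets, weights, shape):
    """Accumulate weighted keypoint votes cast by every pixel.

    offsets: (H, W, 2) array; pixel (y, x) votes for position (y, x) + offset.
    weights: (H, W) per-pixel vote confidences.
    Returns an (H, W) heatmap; its argmax is the consensus keypoint estimate.
    """
    H, W = shape
    heat = np.zeros((H, W))
    ys, xs = np.mgrid[0:H, 0:W]
    # Clip vote targets to the image so every vote lands on a valid pixel.
    ty = np.clip(ys + offsets[..., 0], 0, H - 1).astype(int)
    tx = np.clip(xs + offsets[..., 1], 0, W - 1).astype(int)
    np.add.at(heat, (ty, tx), weights)  # unbuffered scatter-add of votes
    return heat
```

Because every pixel contributes, the peak reflects agreement across the whole image rather than a single local detection.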

    HoughNet: Integrating Near and Long-Range Evidence for Bottom-Up Object Detection

    This paper presents HoughNet, a one-stage, anchor-free, voting-based, bottom-up object detection method. Inspired by the Generalized Hough Transform, HoughNet determines the presence of an object at a certain location from the sum of the votes cast on that location. Votes are collected from both nearby and long-distance locations based on a log-polar vote field. Thanks to this voting mechanism, HoughNet is able to integrate both near- and long-range, class-conditional evidence for visual recognition, thereby generalizing and enhancing current object detection methodology, which typically relies only on local evidence. On the COCO dataset, HoughNet's best model achieves 46.4 AP (and 65.1 AP50), performing on par with the state of the art in bottom-up object detection and outperforming most major one-stage and two-stage methods. We further validate the effectiveness of our proposal on another task, namely "labels to photo" image generation, by integrating the voting module of HoughNet into two different GAN models and showing that accuracy is significantly improved in both cases. Code is available at https://github.com/nerminsamet/houghnet.
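The log-polar vote field can be sketched in a few lines: a relative offset is binned into an angular sector and an exponentially widening radial ring, so long-range evidence is pooled coarsely while near-range evidence keeps fine resolution. The bin counts and base radius below are illustrative choices, not HoughNet's actual configuration.

```python
import numpy as np

def log_polar_bin(dy, dx, n_angles=6, n_rings=4, r0=2.0):
    """Map a relative offset (dy, dx) to a (ring, sector) bin of a
    log-polar vote field.

    Ring width doubles with each ring, so distant locations share coarse
    bins while nearby locations are resolved finely.
    """
    r = np.hypot(dy, dx)
    # Ring 0 covers r < r0; ring k covers r0 * 2**(k-1) <= r < r0 * 2**k.
    ring = 0 if r < r0 else min(int(np.log2(r / r0)) + 1, n_rings - 1)
    # Quantize the angle into n_angles equal sectors.
    sector = int(((np.arctan2(dy, dx) + np.pi) / (2 * np.pi)) * n_angles) % n_angles
    return ring, sector
```

Votes accumulated per (ring, sector) bin then provide the class-conditional near- and long-range evidence described in the abstract.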

    Cognitive decline in Parkinson disease

    This is the author accepted manuscript; the final version is available from Nature Research via the DOI in this record. Dementia is a frequent problem encountered in advanced stages of Parkinson disease (PD). In recent years, research has focused on the pre-dementia stages of cognitive impairment in PD, including mild cognitive impairment (MCI). Several longitudinal studies have shown that MCI is a harbinger of dementia in PD, although the course is variable, and stabilization of cognition, or even reversal to normal cognition, is not uncommon. In addition to limbic and cortical spread of Lewy pathology, several other mechanisms are likely to contribute to cognitive decline in PD, and a variety of biomarker studies, some using novel structural and functional imaging techniques, have documented in vivo brain changes associated with cognitive impairment. The evidence consistently suggests that low cerebrospinal fluid levels of amyloid-β42, a marker of comorbid Alzheimer disease (AD), predict future cognitive decline and dementia in PD. Emerging genetic evidence indicates that in addition to the APOE*ε4 allele (an established risk factor for AD), GBA mutations and SNCA mutations and triplications are associated with cognitive decline in PD, whereas the findings are mixed for MAPT polymorphisms. Cognitive-enhancing medications have some effect in PD dementia, but there is no convincing evidence that progression from MCI to dementia can be delayed or prevented, although cognitive training has shown promising results. Funding: National Institute for Health Research (NIHR); Royal Society; Wolfson Foundation.

    Acquisition vs. Memorization Trade-Offs Are Modulated by Walking Distance and Pattern Complexity in a Large-Scale Copying Paradigm

    In a “block-copying paradigm”, subjects were required to copy a configuration of colored blocks from a model area to a distant work area, using additional blocks provided at an equally distant resource area. Experimental conditions varied the inter-area separation (walking distance) and the complexity of the block patterns to be copied. Two major behavioral strategies were identified: in the memory-intensive strategy, subjects memorize large parts of the pattern and rebuild them without intermediate visits to the model area. In the acquisition-intensive strategy, subjects memorize one block at a time and return to the model after having placed this block. Results show that the frequency of the memory-intensive strategy increases for larger inter-area separations (longer walking distances) and for simpler block patterns. This strategy shift can be interpreted as the result of an optimization process or trade-off, minimizing combined, condition-dependent costs of the two strategies. Combined costs correlate with overall response time. We present evidence that for the memory-intensive strategy, costs correlate with model visit duration, while for the acquisition-intensive strategy, costs correlate with inter-area transition (i.e., walking) times.
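The trade-off can be made concrete with a toy cost model (our illustration, not the authors' analysis): suppose each model visit costs a round-trip walk plus an encoding time that grows quadratically with the number of blocks memorized per visit, so harder patterns are costlier to hold in memory. Minimizing total time then favours larger memorized chunks as walking distance grows.

```python
def total_time(n_blocks, chunk, walk, fix):
    """Total copying time when `chunk` blocks are memorized per model visit.

    Each visit costs a round trip (2 * walk) plus an encoding time assumed
    to grow quadratically with chunk size (fix * chunk**2).
    """
    visits = n_blocks / chunk
    return visits * (2 * walk + fix * chunk ** 2)

def best_chunk(n_blocks, walk, fix, max_chunk=8):
    """Chunk size that minimizes total time under this toy model."""
    return min(range(1, max_chunk + 1),
               key=lambda c: total_time(n_blocks, c, walk, fix))
```

Under these assumptions, a long walk (high `walk`) pushes the optimum toward memory-intensive copying, while costly encoding (high `fix`, i.e. complex patterns) pushes it toward one-block-at-a-time acquisition, mirroring the reported strategy shift.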

    The psychosis spectrum in Parkinson disease

    This is the author accepted manuscript; the final version is available from the publisher via the DOI in this record. In 2007, the clinical and research profile of illusions, hallucinations, delusions and related symptoms in Parkinson disease (PD) was raised with the publication of a consensus definition of PD psychosis. Symptoms that were previously deemed benign and clinically insignificant were incorporated into a continuum of severity, leading to the rapid expansion of literature focusing on clinical aspects, mechanisms and treatment. Here, we review this literature and the evolving view of PD psychosis. Key topics include the prospective risk of dementia in individuals with PD psychosis, and the causal and modifying effects of PD medication. We discuss recent developments, including recognition of an increase in the prevalence of psychosis with disease duration, addition of new visual symptoms to the psychosis continuum, and identification of frontal executive, visual perceptual and memory dysfunction at different disease stages. In addition, we highlight novel risk factors (for example, autonomic dysfunction) that have emerged from prospective studies, structural MRI evidence of frontal, parietal, occipital and hippocampal involvement, and the approval of pimavanserin for the treatment of PD psychosis. The accumulating evidence raises novel questions and directions for future research to explore the clinical management and biomarker potential of PD psychosis. Funding: National Institute for Health Research (NIHR).

    Characterization of digital medical images utilizing support vector machines

    BACKGROUND: In this paper we discuss an efficient methodology for the image analysis and characterization of digital images containing skin lesions using Support Vector Machines, and present the results of a preliminary study. METHODS: The methodology is based on the support vector machines algorithm for data classification and has been applied to the problem of recognizing malignant melanoma versus dysplastic naevus. Border- and colour-based features were extracted from digital images of skin lesions acquired under reproducible conditions, using basic image processing techniques. Two alternative classification methods, statistical discriminant analysis and the application of neural networks, were also applied to the same problem, and the results are compared. RESULTS: The SVM (Support Vector Machines) algorithm performed quite well, achieving 94.1% correct classification, which is better than the performance of the other two classification methodologies. The discriminant analysis method correctly classified 88% of cases (71% of malignant melanomas and 100% of dysplastic naevi), while the neural networks performed approximately the same. CONCLUSION: The use of a computer-based system, like the one described in this paper, is intended to avoid human subjectivity and to perform specific tasks according to a number of criteria. However, the presence of an expert dermatologist is considered necessary for the overall visual assessment of the skin lesion and the final diagnosis.
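A minimal sketch of the classification step, assuming two toy features standing in for the paper's border- and colour-based descriptors (scikit-learn's SVC is used for illustration; the study's actual feature extraction and data are not reproduced):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical 2-D features: [border_irregularity, colour_variance].
naevi = rng.normal([0.2, 0.3], 0.05, size=(40, 2))     # dysplastic naevi
melanoma = rng.normal([0.7, 0.8], 0.05, size=(40, 2))  # malignant melanomas
X = np.vstack([naevi, melanoma])
y = np.array([0] * 40 + [1] * 40)  # 0 = naevus, 1 = melanoma

# Fit an SVM with an RBF kernel on the labelled feature vectors.
clf = SVC(kernel="rbf").fit(X, y)
```

New lesions are then classified by `clf.predict` on their extracted feature vectors; in practice the features would come from the segmentation and colour analysis pipeline described in the abstract.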

    3D time series analysis of cell shape using Laplacian approaches

    Background: Fundamental cellular processes such as cell movement, division or food uptake critically depend on cells being able to change shape. Fast acquisition of three-dimensional image time series has now become possible, but we lack efficient tools for analysing shape deformations in order to understand the real three-dimensional nature of shape changes. Results: We present a framework for 3D+time cell shape analysis. The main contribution is three-fold: first, we develop a fast, automatic random-walker method for cell segmentation. Second, a novel topology-fixing method is proposed to fix segmented binary volumes without spherical topology. Third, we show that the algorithms used for each individual step of the analysis pipeline (cell segmentation, topology fixing, spherical parameterization, and shape representation) are closely related to the Laplacian operator. The framework is applied to the shape analysis of neutrophil cells. Conclusions: The method we propose for cell segmentation is faster than the traditional random-walker method or the level-set method, and performs better on 3D time series of neutrophil cells, which are comparatively noisy as stacks have to be acquired fast enough to account for cell motion. Our method for topology fixing outperforms the tools provided by SPHARM-MAT and SPHARM-PDM in terms of successful fixing rates. The different tasks in the presented pipeline for 3D+time shape analysis of cells can all be solved using Laplacian approaches, opening the possibility of eventually combining individual steps in order to speed up computations.
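The connection to the Laplacian can be illustrated on the simplest possible case: random-walker segmentation on a 1-D chain graph with unit edge weights, where the seeded Laplacian system is solved directly (a toy sketch, not the paper's 3-D implementation):

```python
import numpy as np

def random_walker_1d(n, seed_fg, seed_bg):
    """Random-walker segmentation on a 1-D chain graph of n nodes.

    Builds the combinatorial graph Laplacian L, fixes the two seed nodes,
    and solves the linear system L_u p_u = -B p_s for the probability that
    a random walker starting at each node reaches the foreground seed
    before the background seed.
    """
    L = np.zeros((n, n))
    for i in range(n - 1):  # chain edges: node i -- node i+1, weight 1
        L[i, i] += 1; L[i + 1, i + 1] += 1
        L[i, i + 1] -= 1; L[i + 1, i] -= 1
    prob = np.zeros(n)
    prob[seed_fg] = 1.0  # foreground seed fixed at probability 1
    seeded = [seed_fg, seed_bg]
    unseeded = [i for i in range(n) if i not in seeded]
    Lu = L[np.ix_(unseeded, unseeded)]  # Laplacian block over free nodes
    B = L[np.ix_(unseeded, seeded)]     # coupling to the seeded nodes
    prob[unseeded] = np.linalg.solve(Lu, -B @ prob[seeded])
    return prob
```

On a uniform chain the solution is harmonic, i.e. linear between the seeds; in an image, edge weights derived from intensity differences bend these probabilities toward object boundaries.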

    ImageParser: a tool for finite element generation from three-dimensional medical images

    BACKGROUND: The finite element method (FEM) is a powerful mathematical tool to simulate and visualize the mechanical deformation of tissues and organs during medical examinations or interventions. It remains a challenge to build an FEM mesh directly from a volumetric image, partly because the regions (or structures) of interest (ROIs) may be irregular and fuzzy. METHODS: A software package, ImageParser, is developed to generate an FEM mesh from 3-D tomographic medical images. This software uses a semi-automatic method to detect ROIs from the image context, including neighboring tissues and organs, completes segmentation of the different tissues, and meshes the organ into elements. RESULTS: ImageParser is shown to build an FEM model for simulating the mechanical responses of the breast based on 3-D CT images. The breast is compressed by two plate paddles under an overall displacement as large as 20% of the initial distance between the paddles. The strain and tangential Young's modulus distributions are specified for the biomechanical analysis of breast tissues. CONCLUSION: ImageParser can successfully extract the geometry of ROIs from a complex medical image and generate an FEM mesh with user-defined segmentation information.
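The voxel-to-element idea can be sketched in 2-D (a simplified illustration, not ImageParser itself): each foreground pixel of a segmented ROI mask becomes a four-node quadrilateral element, with corner nodes de-duplicated so adjacent elements share nodes and the mesh is connected.

```python
import numpy as np

def voxel_mesh(mask):
    """Turn a 2-D binary ROI mask into FEM nodes and quad elements.

    Each foreground pixel becomes one 4-node quadrilateral element whose
    corners sit on the pixel grid; shared corners map to the same node id,
    so neighbouring elements are mechanically connected.
    """
    nodes, index = [], {}

    def node(y, x):
        # Return the id of grid corner (y, x), creating it on first use.
        if (y, x) not in index:
            index[(y, x)] = len(nodes)
            nodes.append((y, x))
        return index[(y, x)]

    elems = []
    for y, x in zip(*np.nonzero(mask)):
        # Corners listed counter-clockwise, as FEM connectivity expects.
        elems.append([node(y, x), node(y, x + 1),
                      node(y + 1, x + 1), node(y + 1, x)])
    return np.array(nodes), np.array(elems)
```

A 3-D version would emit 8-node hexahedra per voxel; the node/element arrays are exactly the inputs a FEM solver needs for assembly.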

    Pathway analysis comparison using Crohn's disease genome wide association studies

    Background: The use of biological annotation such as genes and pathways in the analysis of gene expression data has aided the identification of genes for follow-up studies and suggested functional information for uncharacterized genes. Several studies have applied similar methods to genome-wide association studies and identified a number of disease-related pathways. However, many questions remain on how best to approach this problem, such as whether there is a need to obtain a score summarizing association evidence at the gene level, and whether a pathway dominated by just a few highly significant genes is of interest. Methods: We evaluated the performance of two pathway-based methods (Random Set, and the binomial approximation to the hypergeometric test) based on their application to three data sets of Crohn's disease. We consider both the disease status as a phenotype and the residuals after conditioning on IL23R, a known Crohn's-related gene, as a phenotype. Results: Our results show that the Random Set method has the most power to identify disease-related pathways. We confirm previously reported disease-related pathways and provide evidence for IL-2 Receptor Beta Chain in T cell Activation and IL-9 signaling as Crohn's disease-associated pathways. Conclusions: Our results highlight the need to apply powerful gene score methods prior to pathway enrichment tests, and show that controlling for genes that attain genome-wide significance enables further biological insight.
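The hypergeometric enrichment test mentioned above can be written out directly: given N genes in total, K of which belong to a pathway, and n significant genes of which k fall in the pathway, the enrichment p-value is the upper tail P(X >= k). This is a generic sketch of the test; the parameter values below are illustrative.

```python
from math import comb

def hypergeom_pvalue(k, K, n, N):
    """Upper-tail hypergeometric p-value for pathway enrichment.

    P(X >= k) when n genes are drawn without replacement from N total,
    K of which belong to the pathway. math.comb(a, b) returns 0 for
    b > a, so out-of-range terms vanish automatically.
    """
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)
```

A small p-value indicates more pathway genes among the significant set than chance would allow; gene-level scoring (as the abstract argues) determines which genes count as "significant" before this test is applied.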

    Heuristics-based detection to improve text/graphics segmentation in complex engineering drawings

    The demand for digitisation of complex engineering drawings becomes increasingly important for industry, given the pressure to improve the efficiency and time-effectiveness of operational processes. There have been numerous attempts to solve this problem, either by proposing a general form of document interpretation or by establishing an application-dependent framework. Moreover, text/graphics segmentation has been presented as a particular way of addressing the document digitisation problem, with the main aim of splitting text and graphics into different layers. Given the challenging characteristics of complex engineering drawings, this paper presents a novel sequential heuristics-based methodology aimed at localising and detecting the most representative symbols of the drawing. This implementation enables the subsequent application of a text/graphics segmentation method in a more effective form. The experimental framework is composed of two parts: first we show the performance of the symbol detection system, and then we present an evaluation of three different state-of-the-art text/graphics segmentation techniques for finding text in the remaining image.
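A classic size-based heuristic of the kind such pipelines build on can be sketched as follows: connected components below an area threshold go to the text layer, the rest to the graphics layer (an illustrative baseline, not the paper's method; the threshold is arbitrary).

```python
import numpy as np
from collections import deque

def split_text_graphics(binary, max_text_area=20):
    """Heuristic text/graphics split of a binary drawing image.

    Finds 4-connected components by flood fill; components with at most
    max_text_area pixels are labelled text, larger ones graphics.
    Returns (text_layer, graphics_layer) binary masks.
    """
    H, W = binary.shape
    seen = np.zeros((H, W), dtype=bool)
    text = np.zeros_like(binary)
    graphics = np.zeros_like(binary)
    for sy, sx in zip(*np.nonzero(binary)):
        if seen[sy, sx]:
            continue
        comp, q = [], deque([(sy, sx)])
        seen[sy, sx] = True
        while q:  # breadth-first flood fill of one component
            y, x = q.popleft()
            comp.append((y, x))
            for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                if 0 <= ny < H and 0 <= nx < W and binary[ny, nx] and not seen[ny, nx]:
                    seen[ny, nx] = True
                    q.append((ny, nx))
        layer = text if len(comp) <= max_text_area else graphics
        for y, x in comp:
            layer[y, x] = 1
    return text, graphics
```

Real engineering drawings defeat a pure size rule (dashed lines fragment, large characters exceed the threshold), which is why the paper removes dominant symbols first before applying text/graphics segmentation.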